45 research outputs found

    Huge networks, tiny faulty nodes

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 87-91). Can one build, and efficiently use, networks of arbitrary size and topology using a "standard" node whose resources, in terms of memory and reliability, do not need to scale up with the complexity and size of the network? This thesis addresses two important aspects of this question. The first is whether one can achieve efficient connectivity despite a constant probability of faults per node/link. Efficient connectivity means (informally) that every pair of regions is connected by a constant fraction of the independent, entirely non-faulty paths that would be present if the entire network were fault-free, even at distances where each individual path has only a vanishingly small probability of being fault-free. The answer is yes, as long as some very mild topological conditions on the high-level structure of the network are met: informally, the network must not be too "thin" and must not contain too many large "holes". The results go against some established "empirical wisdom" in the networking community. The second issue addressed by this thesis is whether one can route efficiently on a network of arbitrary size and topology using only a constant number c of bits per node (even if c is less than the logarithm of the network's size!). Routing efficiently means (informally) that message delivery stretches the delivery path by at most a constant factor. The answer again is yes, as long as the volume of the network grows only polynomially with its radius (otherwise we run into established lower bounds). This effectively captures every network one may build in a universe (like our own) with finite dimensionality, using links of a fixed maximum length and nodes with a fixed minimum volume. The results extend the current results for compact routing, allowing one to route efficiently on a much larger class of networks than had previously been known, with many fewer bits. by Enoch Peserico. Ph.D.
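
    The connectivity claim can be pictured with a small percolation-style experiment. The sketch below is purely illustrative: it is not the thesis's construction, and the grid size n, fault probability p and trial count are made-up parameters. Every node of an n x n grid fails independently with probability p, and we estimate how often the left and right edges remain connected through non-faulty nodes, even though any single fixed left-to-right path is almost surely broken at this length.

    # Illustrative Monte Carlo sketch (hypothetical parameters, not the thesis's
    # construction): estimate how often the left and right edges of an n x n grid
    # stay connected through non-faulty nodes when each node fails with probability p.
    import random
    from collections import deque

    def edges_connected(n, p, rng):
        alive = [[rng.random() >= p for _ in range(n)] for _ in range(n)]
        seen = [[False] * n for _ in range(n)]
        frontier = deque((r, 0) for r in range(n) if alive[r][0])
        for r, c in frontier:
            seen[r][c] = True
        while frontier:
            r, c = frontier.popleft()
            if c == n - 1:
                return True                      # reached the rightmost column
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < n and 0 <= nc < n and alive[nr][nc] and not seen[nr][nc]:
                    seen[nr][nc] = True
                    frontier.append((nr, nc))
        return False

    if __name__ == "__main__":
        rng = random.Random(0)
        n, p, trials = 60, 0.2, 200
        hits = sum(edges_connected(n, p, rng) for _ in range(trials))
        # A fixed 60-node path survives with probability 0.8**60 (about 1e-6),
        # yet the two edges stay connected in the vast majority of trials.
        print(f"connected in {hits}/{trials} trials")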

    psort

    psort was the fastest sorting software for PC-class machines from 2008 to 2011 (PennySort benchmark, http://sortbenchmark.org), and an adaptation of it for clusters improved the record for the Datamation benchmark by almost an order of magnitude in 2011. The official technical report is available on sortbenchmark.org (which catalogues the most efficient sorting software for various categories of task/hardware, originally maintained by Turing Award winner Jim Gray) at the URL http://sortbenchmark.org/psort_2011.pdf -- Further details can be found in the publications: P. Bertasi, M. Bressan, E. Peserico. psort, yet another fast stable sorting software. ACM Journal of Experimental Algorithmics, vol. 16, 2011 -- P. Bertasi, M. Bonazza, M. Bressan, E. Peserico. Datamation: a quarter of a century and four orders of magnitude later. Proc. of IEEE CLUSTER 2011.

    Flashcrowding in tiled multiprocessors under thermal constraints

    This work argues that, in the face of growing thermal constraints, the most effective tiled processor design in an increasing number of scenarios is one that supports efficient flashcrowding: in a nutshell, placing on a chip far more computational power than it can sustain for extended periods of time, and concentrating computation into a few transient hotspots.
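
    A minimal toy model of the idea, with made-up heating and cooling rates (this is not the paper's thermal model or proposed design): a single hot task hops to the coolest tile whenever its current tile approaches a thermal limit, so the chip as a whole sustains a level of activity that no individual tile could sustain on its own.

    # Toy hotspot-migration sketch (hypothetical parameters, not the paper's model):
    # one hot task migrates to the coolest tile whenever its current tile nears the
    # thermal limit; idle tiles cool down in the meantime.
    def simulate(tiles=8, steps=2000, heat=3.0, cool=0.5, limit=40.0):
        temp = [0.0] * tiles
        active, migrations = 0, 0
        for _ in range(steps):
            for t in range(tiles):
                if t == active:
                    temp[t] += heat                      # active tile heats up fast
                else:
                    temp[t] = max(0.0, temp[t] - cool)   # idle tiles slowly cool
            if temp[active] >= limit:                    # migrate before overheating
                active = min(range(tiles), key=lambda t: temp[t])
                migrations += 1
        return max(temp), migrations

    if __name__ == "__main__":
        peak, moves = simulate()
        print(f"peak tile temperature {peak:.1f} after {moves} migrations")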

    (More) Efficient Pruning of Wireless Ad Hoc Networks

    To what extent can one "prune" the links in a wireless network while retaining (almost) the same connectivity achieved when each node is connected to all nodes within its communication radius? In a nutshell, if each node explores a portion of its neighborhood sufficient to reach 48 log(1/ε) other nodes in expectation, then w.h.p. all but a fraction ε of the network joins the same connected component. Each node can then "locally" choose to maintain (at most) 4 links, and the network still retains w.h.p. essentially the same level of connectivity.
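
    A rough illustration of the first statement (a simplified stand-in, not the paper's algorithm, and it omits the final step of keeping at most 4 links): scatter n nodes uniformly at random, connect each to its k nearest neighbours with k on the order of 48 log(1/ε), and measure what fraction of the nodes ends up in the largest connected component.

    # Simplified illustration (not the paper's algorithm; n, eps and the k-nearest-
    # neighbour exploration rule are stand-ins): connect each node to its k nearest
    # neighbours, k ~ 48*log(1/eps), and report the largest component's share.
    import math
    import random

    def largest_component_fraction(n=2000, eps=0.1, seed=0):
        rng = random.Random(seed)
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        k = max(1, math.ceil(48 * math.log(1 / eps)))
        adj = [set() for _ in range(n)]
        for i, (xi, yi) in enumerate(pts):
            near = sorted(range(n), key=lambda j: (pts[j][0] - xi) ** 2 + (pts[j][1] - yi) ** 2)
            for j in near[1:k + 1]:          # skip near[0], which is i itself
                adj[i].add(j)
                adj[j].add(i)
        seen = [False] * n
        best = 0
        for s in range(n):                   # size of the largest component via DFS
            if seen[s]:
                continue
            stack, size = [s], 0
            seen[s] = True
            while stack:
                u = stack.pop()
                size += 1
                for v in adj[u]:
                    if not seen[v]:
                        seen[v] = True
                        stack.append(v)
            best = max(best, size)
        return best / n

    if __name__ == "__main__":
        print(f"largest component holds {largest_component_fraction():.1%} of the nodes")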

    Elastic paging

    We study a generalization of the classic paging problem where memory capacity can vary over time - a property of many modern computing realities, from cloud computing to multi-core and energy-optimized processors. We show that good performance in the "classic" case provides no performance guarantees when memory capacity fluctuates: roughly speaking, moving from static to dynamic capacity can mean the difference between optimality within a factor 2 in space, time and energy, and suboptimality by an arbitrarily large factor. Surprisingly, several classic paging algorithms still perform remarkably well, maintaining that factor 2 optimality even when faced with adversarial capacity fluctuations - without taking those fluctuations into explicit account.
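
    The setting is easy to simulate. The sketch below is only a toy (the request distribution and the capacity schedule are invented, and no optimality guarantee is computed): it runs plain LRU while the available capacity fluctuates, shrinking the cache whenever capacity drops.

    # Toy elastic-paging run (hypothetical workload and capacity schedule): LRU
    # serving requests while memory capacity varies over time; pages are evicted
    # whenever the cache exceeds the current capacity.
    from collections import OrderedDict
    import random

    def lru_faults(requests, capacities):
        cache = OrderedDict()              # page -> None, oldest (LRU) first
        faults = 0
        for page, cap in zip(requests, capacities):
            if page in cache:
                cache.move_to_end(page)
            else:
                faults += 1
                cache[page] = None
            while len(cache) > cap:        # shrink to the current capacity
                cache.popitem(last=False)
        return faults

    if __name__ == "__main__":
        rng = random.Random(1)
        n = 10000
        requests = [rng.randint(0, 49) for _ in range(n)]
        capacities = [10 + 5 * ((t // 500) % 3) for t in range(n)]   # cycles 10/15/20
        print("LRU faults under fluctuating capacity:", lru_faults(requests, capacities))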

    The Lazy Adversary Conjecture fails

    We prove that, in general, the lazy adversary conjecture fails. Moreover, it fails in a very strong sense: an adversary that is even "slightly lazy" can perform arbitrarily worse than one that is not.

    Paging with Arbitrary Associativity

    We tackle the problem of online paging on two-level memories with arbitrary associativity (including victim caches, skewed caches, etc.). We show that some important classes of paging algorithms are not competitive on a wide class of associativities (even with arbitrary resource augmentation), and that although some algorithms designed for full associativity are actually competitive on any two-level memory, the myopic behavior of paging algorithms designed for full associativity will generally result in very poor performance on at least some "associativity topologies". At the same time, we present a simple yet powerful technique that overcomes this shortcoming, generalizing algorithms designed for full associativity into practical algorithms that are efficient on two-level memories with arbitrary associativity. We identify a simple topological parameter, pseudo-associativity, which characterizes the competitive ratio achievable on any two-level memory, giving a lower bound on the competitiveness achievable by any paging algorithm and matching it within a factor 4 with a novel algorithm.
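
    For concreteness, the sketch below simulates one very simple "associativity topology": a set-associative cache with LRU within each set (the parameters and workload are invented, and this is neither the paper's general model nor its new algorithm). It shows how a workload that fits comfortably in the cache's total capacity can still thrash when the pages it touches collide in a single set.

    # Minimal set-associative cache with per-set LRU (hypothetical parameters;
    # not the paper's model of arbitrary associativity).
    from collections import OrderedDict

    class SetAssociativeCache:
        def __init__(self, num_sets, ways):
            self.num_sets = num_sets
            self.ways = ways
            self.sets = [OrderedDict() for _ in range(num_sets)]

        def access(self, page):
            s = self.sets[page % self.num_sets]   # each page maps to exactly one set
            if page in s:
                s.move_to_end(page)
                return False                      # hit
            if len(s) >= self.ways:
                s.popitem(last=False)             # evict the set's LRU line only
            s[page] = None
            return True                           # miss

    if __name__ == "__main__":
        cache = SetAssociativeCache(num_sets=8, ways=2)
        requests = [0, 8, 16, 0, 8, 16, 1, 9, 1, 9] * 50
        misses = sum(cache.access(p) for p in requests)
        # Pages 0, 8 and 16 all collide in set 0, so the 2-way set thrashes even
        # though the cache as a whole has 16 lines to spare.
        print("misses:", misses)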

    The Lazy Adversary Conjecture Fails

    In the context of competitive analysis of online algorithms for the k-server problem, it has been conjectured that every randomized, memoryless online algorithm exhibits the highest competitive ratio against an offline adversary that is lazy, i.e., that will issue requests forcing it to move one of its own servers only when this is strictly necessary to force a move on the part of the online algorithm. We prove that, in general, this lazy adversary conjecture fails. Moreover, it fails in a very strong sense: there are adversaries which perform arbitrarily better than any other adversary which is even slightly "lazier."